.require "memo.pub[let,jmc]" source
.item←0
.at "qi" ⊂"%8r%*"⊃
.at "qe" ⊂"%8q%*"⊃
.font C "zero30"
.at "Goedel" ⊂"G%C:%*odel"⊃
.at "qx" ⊂"%C¬%*x"⊃
.at "qy" ⊂"%C¬%*y"⊃
.at "qF" ⊂"%AF%*"⊃
.at "qG" ⊂"%AY%*"⊃
.turn on "_" for "#"
.turn on "α"
.turn on "↑[]"
.turn on "→"
.cb FORMALIZATION OF OCKHAM'S RAZOR
Research in artificial intelligence involves two components
which may be called ⊗epistemology and ⊗heuristics. Heuristics deals
with searching spaces of possibilities, with pattern matching, with planning,
and with many other topics. The epistemological part of AI, which seems
to be contiguous with some of the epistemological problems of philosophy,
is concerned with what the general facts of the common sense world are
and how they may be represented in the memory of a computer, with how the
particular facts of a situation are consequences of observation or
communication and how they may be represented, and with the rules of
inference or conjecture whereby an organism or machine may
justifiably go from premisses to conclusions.
The work described here arises
in trying to make computers reason, i.e. draw conclusions from facts,
in common sense situations.
An early approach (McCarthy 1960), which had partial success
and whose ideas have led to other approaches, is to represent
facts about the world in general and about particular situations by
sentences of some system of mathematical logic. First order languages
have mainly been used, because they are the best understood and because
set theory, which is as powerful as any system of logic, can be formulated
as a first order theory.
While the facts of particular situations are often conveniently
represented in first order formalisms, it eventually
became clear that not all human reasoning, and not all reasoning
that computers must do to behave intelligently, can be modelled
by the rules of inference of a system of mathematical logic.
It becomes necessary to supplement the valid %2rules of inference%1
of logic with %2rules of conjecture%1. However, it turns out that
some of these rules of conjecture admit mathematical formalization as
precise as that of the rules of inference of logic.
The problem is that we often draw conclusions in the absence
of certain facts that we would not draw in their presence. In particular,
we often presume that the objects explicitly mentioned, together with the
objects whose existence can be deduced, are all the objects there are of
a given kind. This is a kind of Ockham's razor, although, because of a
change in the formalism I shall discuss, the parallel is less striking
than when the title of the lecture was chosen.
The phenomenon that new facts invalidate correctly deduced
conclusions violates a monotonicity property possessed by all systems of
mathematical logic. Symbolically, ⊗monotonicity is expressed by
.begin nofill narrow 8; select 2;
A qi p
A ⊂ B
______
B qi p
.end
Here ⊗A and ⊗B are sets of sentences, ⊗p is a sentence, and
%2A_qi_p%1 is read %2"p is logically deducible from A"%1. The point
is that if ⊗p is a logical consequence of a collection ⊗A of sentences
and ⊗B is a more inclusive collection of sentences, then ⊗p is a
logical consequence of ⊗B. Indeed the proof of ⊗p from ⊗A will also
serve as a proof of ⊗p from ⊗B.
Monotonicity is a characteristic of all present logical systems, and
perhaps of all possible logical systems that do only reasoning guaranteed
to be correct.
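As a rough illustration in present-day Python, the following sketch of a
propositional Horn-clause forward chainer exhibits the monotonicity property;
the particular facts and rules are invented for the illustration and have no
other significance.
.begin nofill

# A minimal propositional Horn-clause forward chainer, sketched to
# illustrate monotonicity; the facts and rules below are invented.
def consequences(facts, rules):
    """Return the atoms derivable from `facts`, where each rule
    is a pair (set of premises, conclusion)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

rules = [({"p"}, "q"), ({"q", "r"}, "s")]
A = {"p"}
B = A | {"r"}                     # B is a more inclusive set of facts
# Everything deducible from A is still deducible from B.
assert consequences(A, rules) <= consequences(B, rules)

.end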
That much ordinary reasoning is not monotonic is best
seen with the aid of an example. I know that you own a car, and
I conclude that it is appropriate to ask you for a ride to the airport.
Before asking, however, I learn that your car is being repaired and
conclude that it is not appropriate. Then I learn that you have your
wife's car, etc., etc. No new fact contradicts the previous collection
of facts, but each new fact reverses the appropriateness of asking for a
ride.
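A crude sketch of this example, with invented predicate names such as
owns_car and car_in_repair, treats the appropriateness of asking as a
default computed from whatever facts are currently known; each added fact
reverses the answer without contradicting any earlier fact.
.begin nofill

# A sketch of the airport-ride example; the predicate names and the
# default rule are invented for illustration.
def appropriate_to_ask(facts):
    # Default: a car is presumed usable unless it is known to be in repair.
    has_usable_car = ("owns_car" in facts and "car_in_repair" not in facts) \
                     or "has_wifes_car" in facts
    return has_usable_car

facts = {"owns_car"}
print(appropriate_to_ask(facts))   # True: you own a car
facts.add("car_in_repair")
print(appropriate_to_ask(facts))   # False: the new fact reverses the conclusion
facts.add("has_wifes_car")
print(appropriate_to_ask(facts))   # True again

.end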
One can attempt to deal with this phenomenon in various ways,
and perhaps the one most congenial to what I take to be the preconceptions
of this audience would be to introduce probabilities. For reasons to be
discussed later, I don't believe this meets the problem head on,
and I shall discuss two non-probabilistic approaches to non-monotonic
reasoning that have been taken by researchers in artificial intelligence.
Both approaches are discussed in articles in a forthcoming issue of the
journal %2Artificial Intelligence%1. See references
(McDermott and Doyle 1980), (Reiter 1980)
and (McCarthy 1980).
Suppose we have a binary (say) predicate ⊗P(x,y) axiomatized
by a sentence ⊗Axiom(P). The conjecture that ⊗P applies to the smallest
possible set of pairs consistent with its satisfying ⊗Axiom(P) can be
expressed by the schema
%2Axiom(qF) ∧ ∀x y.(qF(x,y) ⊃ P(x,y)) ⊃ ∀x y.(P(x,y) ⊃ qF(x,y))%1.
Here %2qF(x,y)%1 can be an arbitrary wff with free variables ⊗x and ⊗y.
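Over a finite domain the effect of this schema can be exhibited by brute
force: among the relations satisfying ⊗Axiom(P), only the minimal ones
survive as candidates for ⊗P. The following Python sketch does this for a
two-element domain and an invented axiom chosen only for illustration.
.begin nofill

# Brute-force minimization of a binary predicate over a tiny domain;
# the domain and the axiom are invented for this sketch.
from itertools import chain, combinations

DOMAIN = ["a", "b"]
PAIRS = [(x, y) for x in DOMAIN for y in DOMAIN]

def axiom(P):
    # Invented example axiom: P must at least contain the pair (a, a).
    return ("a", "a") in P

def all_relations(pairs):
    # Every subset of the possible pairs, i.e. every binary relation.
    return (frozenset(c) for c in
            chain.from_iterable(combinations(pairs, r)
                                for r in range(len(pairs) + 1)))

models = [P for P in all_relations(PAIRS) if axiom(P)]
minimal = [P for P in models if not any(Q < P for Q in models)]
print(minimal)   # [frozenset({('a', 'a')})]: the smallest P satisfying the axiom

.end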
Non-monotonic reasoning may also be used to
express the idea that %2it isn't safe to cross the tracks if there may be
a train on them%1. Suppose the predicate ⊗on used in the expression
⊗on(train, tracks) is axiomatized by ⊗Axiom(on). The italicized statement
is then expressed by the schema
%2Axiom(qF) ∧ qF(train,tracks) ⊃ ¬safe-to-cross(tracks)%1.
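Read over the same kind of finite setting, the schema says that if some
relation satisfying the axiom puts the train on the tracks, then crossing is
judged unsafe. The following sketch, with an invented and deliberately weak
axiom, illustrates that reading.
.begin nofill

# Sketch of the it-isn't-safe-to-cross schema; the axiom and the
# candidate relations are invented for illustration.
def axiom_on(on):
    # Invented placeholder axiom, too weak to exclude the train
    # from being on the tracks.
    return True

candidate_relations = [frozenset(), frozenset({("train", "tracks")})]
possibly_on = any(axiom_on(on) and ("train", "tracks") in on
                  for on in candidate_relations)
safe_to_cross = not possibly_on
print(safe_to_cross)   # False: a train may be on the tracks

.end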
Some of the mathematical properties of these modes of non-monotonic
reasoning as well as some applications to common sense reasoning will
be described in the paper.
References:
%3McCarthy, John (1960)%1: "Programs with Common Sense", %2Proceedings of
the Teddington Conference on the Mechanization of Thought Processes%1, H.M.
Stationery Office, 1960.
%3McCarthy, John (1980)%1: "Circumscription - A Form of Non-Monotonic
Reasoning", %2Artificial Intelligence%1, forthcoming.
%3McDermott, Drew and Jon Doyle (1980)%1: "Non-Monotonic Logic I",
%2Artificial Intelligence%1, forthcoming.
%3Reiter, Raymond (1980)%1: "A Logic for Default Reasoning",
%2Artificial Intelligence%1, forthcoming.